Overview

Background

During the summer of 2012, wildfires ravaged much of northern Algeria, hitting the coastal cities especially hard. The disaster was driven by higher-than-average temperatures, which reached as high as 50 degrees Celsius.

Objectives

One important measure against the recurrence of such disasters is the ability to predict their occurrence. In this project, we will therefore attempt to predict these forest fires based on multiple features related to weather indices.

Dataset Description

The dataset we will use to train and test our models consists of 244 observations from two Algerian Wilayas (provinces): Sidi-Bel Abbes and Bejaia. The observations were gathered over four months, from June to September 2012, for both regions.

The Dataset contains the following variables:

  1. Date: (DD/MM/YYYY) day, month (June to September), year (2012)
  2. Temperature: maximum temperature at noon in degrees Celsius: 22 to 42
  3. RH: Relative Humidity in %: 21 to 90
  4. Ws: Wind speed in km/h: 6 to 29
  5. Rain: total rainfall for the day in mm: 0 to 16.8
    FWI Components (check this LINK for more information)
  6. Fine Fuel Moisture Code (FFMC) index from the FWI system: 28.6 to 92.5
  7. Duff Moisture Code (DMC) index from the FWI system: 1.1 to 65.9
  8. Drought Code (DC) index from the FWI system: 7 to 220.4
  9. Initial Spread Index (ISI) index from the FWI system: 0 to 18.5
  10. Build-up Index (BUI) index from the FWI system: 1.1 to 68
  11. Fire Weather Index (FWI) Index: 0 to 31.1
  12. Classes: two classes, namely “fire” and “not fire”

Exploratory Data Analysis

We start by importing the necessary libraries for our analysis.

[INSERT DESCRIPTION OF EACH LIBRARY]
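Pending the descriptions above, the packages themselves can be inferred from the functions called later in the analysis. The list below is an assumption, not the author's original list:

```r
# Packages inferred from the functions used below (an assumption, since the
# original list was not shown): dplyr (full_join), tidyr (drop_na),
# plyr (mapvalues), vtable (st), ggplot2, ggcorrplot, plotly (ggplotly),
# caret (train, varImp, confusionMatrix), klaR (stepLDA backend),
# caTools (sample.split), ROCR (prediction, performance), MASS (lda, qda).
pkgs <- c("dplyr", "tidyr", "plyr", "vtable", "ggplot2", "ggcorrplot",
          "plotly", "caret", "klaR", "caTools", "ROCR", "MASS")

# Load whichever are installed; run install.packages(pkgs) first if any are missing.
loaded <- vapply(pkgs, function(p) {
  if (requireNamespace(p, quietly = TRUE)) {
    library(p, character.only = TRUE)
    TRUE
  } else FALSE
}, logical(1))
```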

Importing the data

The Dataset provided to us was in the form of a .csv file that contained two tables, one table for the observations belonging to the Sidi-Bel Abbes region, and the other for Bejaia.

Before starting our analysis, we separated the tables into two distinct files according to region, named Algerian_forest_fires_dataset_Bejaia.csv and Algerian_forest_fires_dataset_Sidi_Bel_Abbes.csv for Bejaia and Sidi-Bel Abbes respectively.

Cleaning and processing the data

We first check for null values in the dataset; none were found.

colSums(is.na(df_b))
        day       month        year Temperature          RH 
          0           0           0           0           0 
         Ws        Rain        FFMC         DMC          DC 
          0           0           0           0           0 
        ISI         BUI         FWI     Classes 
          0           0           0           0 
colSums(is.na(df_s))
        day       month        year Temperature          RH 
          0           0           0           0           0 
         Ws        Rain        FFMC         DMC          DC 
          0           0           0           0           0 
        ISI         BUI         FWI     Classes 
          0           0           0           0 

We then proceed to add a column to both datasets indicating the region (Wilaya) of each observation. We chose the following encoding:

  1. Bejaia = 0
  2. Sidi-Bel Abbes = 1
df_b[["Region"]] = 0
df_s[["Region"]] = 1

After that, we merge both datasets into a single data frame using full_join(), which will allow us to easily explore and analyze the data. Before joining, the DC and FWI columns of the Sidi-Bel Abbes table have to be coerced from character to numeric; the coercion warnings below indicate values that could not be parsed and became NA.

df_s$DC <- as.double(df_s$DC)
Warning: NAs introduced by coercion
df_s$FWI <- as.double(df_s$FWI)
Warning: NAs introduced by coercion
df = full_join(df_s, df_b)
Joining, by = c("day", "month", "year", "Temperature", "RH", "Ws", "Rain", "FFMC", "DMC", "DC", "ISI", "BUI", "FWI", "Classes", "Region")
dim(df)
[1] 244  15
str(df)
'data.frame':   244 obs. of  15 variables:
 $ day        : int  1 2 3 4 5 6 7 8 9 10 ...
 $ month      : int  6 6 6 6 6 6 6 6 6 6 ...
 $ year       : int  2012 2012 2012 2012 2012 2012 2012 2012 2012 2012 ...
 $ Temperature: int  32 30 29 30 32 35 35 28 27 30 ...
 $ RH         : int  71 73 80 64 60 54 44 51 59 41 ...
 $ Ws         : int  12 13 14 14 14 11 17 17 18 15 ...
 $ Rain       : num  0.7 4 2 0 0.2 0.1 0.2 1.3 0.1 0 ...
 $ FFMC       : num  57.1 55.7 48.7 79.4 77.1 83.7 85.6 71.4 78.1 89.4 ...
 $ DMC        : num  2.5 2.7 2.2 5.2 6 8.4 9.9 7.7 8.5 13.3 ...
 $ DC         : num  8.2 7.8 7.6 15.4 17.6 26.3 28.9 7.4 14.7 22.5 ...
 $ ISI        : num  0.6 0.6 0.3 2.2 1.8 3.1 5.4 1.5 2.4 8.4 ...
 $ BUI        : num  2.8 2.9 2.6 5.6 6.5 9.3 10.7 7.3 8.3 13.1 ...
 $ FWI        : num  0.2 0.2 0.1 1 0.9 3.1 6 0.8 1.9 10 ...
 $ Classes    : chr  "not fire   " "not fire   " "not fire   " "not fire   " ...
 $ Region     : num  1 1 1 1 1 1 1 1 1 1 ...
summary(df)
      day            month          year     
 Min.   : 1.00   Min.   :6.0   Min.   :2012  
 1st Qu.: 8.00   1st Qu.:7.0   1st Qu.:2012  
 Median :16.00   Median :7.5   Median :2012  
 Mean   :15.75   Mean   :7.5   Mean   :2012  
 3rd Qu.:23.00   3rd Qu.:8.0   3rd Qu.:2012  
 Max.   :31.00   Max.   :9.0   Max.   :2012  
                                             
  Temperature          RH              Ws      
 Min.   :22.00   Min.   :21.00   Min.   : 6.0  
 1st Qu.:30.00   1st Qu.:52.00   1st Qu.:14.0  
 Median :32.00   Median :63.00   Median :15.0  
 Mean   :32.17   Mean   :61.94   Mean   :15.5  
 3rd Qu.:35.00   3rd Qu.:73.25   3rd Qu.:17.0  
 Max.   :42.00   Max.   :90.00   Max.   :29.0  
                                               
      Rain              FFMC            DMC       
 Min.   : 0.0000   Min.   :28.60   Min.   : 0.70  
 1st Qu.: 0.0000   1st Qu.:72.08   1st Qu.: 5.80  
 Median : 0.0000   Median :83.50   Median :11.30  
 Mean   : 0.7607   Mean   :77.89   Mean   :14.67  
 3rd Qu.: 0.5000   3rd Qu.:88.30   3rd Qu.:20.75  
 Max.   :16.8000   Max.   :96.00   Max.   :65.90  
                                                  
       DC              ISI              BUI       
 Min.   :  6.90   Min.   : 0.000   Min.   : 1.10  
 1st Qu.: 12.35   1st Qu.: 1.400   1st Qu.: 6.00  
 Median : 33.10   Median : 3.500   Median :12.25  
 Mean   : 49.43   Mean   : 4.774   Mean   :16.66  
 3rd Qu.: 69.10   3rd Qu.: 7.300   3rd Qu.:22.52  
 Max.   :220.40   Max.   :19.000   Max.   :68.00  
 NA's   :1                                        
      FWI           Classes              Region   
 Min.   : 0.000   Length:244         Min.   :0.0  
 1st Qu.: 0.700   Class :character   1st Qu.:0.0  
 Median : 4.200   Mode  :character   Median :0.5  
 Mean   : 7.035                      Mean   :0.5  
 3rd Qu.:11.450                      3rd Qu.:1.0  
 Max.   :31.100                      Max.   :1.0  
 NA's   :1                                        
unique(df$year)
[1] 2012
unique(df$month)
[1] 6 7 8 9

We check again for any NA values that might have been introduced by the coercion above, and find one row with NAs in DC and FWI. We delete that row, since losing a single observation will not meaningfully affect our dataset.

colSums(is.na(df))
        day       month        year Temperature          RH 
          0           0           0           0           0 
         Ws        Rain        FFMC         DMC          DC 
          0           0           0           0           1 
        ISI         BUI         FWI     Classes      Region 
          0           0           1           0           0 
df = df %>% drop_na(DC)
dim(df)
[1] 243  15

We now display the ranges of values the categorical variables contain, mainly the Classes and Region columns.

unique(df$Classes)
[1] "not fire   "   "fire   "       "not fire     "
[4] "not fire    "  "fire"          "fire "        
[7] "not fire"      "not fire "    
unique(df$Region)
[1] 1 0

We find that the Classes column contains values with stray leading and trailing spaces, so we trim them.

df$Classes <- trimws(df$Classes, which = c("both"))
unique(df$Classes)
[1] "not fire" "fire"    
df = df %>% drop_na(Classes)
df$Classes <- mapvalues(df$Classes, from=c("not fire","fire"), to=c(0,1))
unique(df$Classes)
[1] "0" "1"
df$Classes <- as.numeric(df$Classes)
st(df)
df <- df[-c(3)] # drop the year column, which is constant (2012)

df_scaled = df
df_scaled[-c(1,2,13,14)] <- scale(df[-c(1,2,13,14)]) # scale everything except day, month, Classes and Region
st(df_scaled)

Visualizing the data

We have ended up with a clean and scaled dataframe named df_scaled, which we will use to visualize and further explore our data.

Our first instinct is to compare the two regions in terms of number of fires and average temperature.

aggregate(df$Classes ~ df$Region, FUN = sum)
aggregate(df$Temperature ~ df$Region, FUN = mean)

We used the unscaled dataset so that the plot shows the actual temperature values.

df %>%
  group_by(Region) %>%
  summarise(Region = Region, Number_of_fires = sum(Classes), Temperature = mean(Temperature)) %>%
  ggplot(aes(x=Region, y=Number_of_fires, fill = Temperature))+
  geom_col(position='dodge')
`summarise()` has grouped output by 'Region'. You can override using the `.groups` argument.

We can see that the Sidi-Bel Abbes region had both a greater total number of fires and a higher average temperature throughout the summer of 2012.

Further Analysis

Correlation Matrix

The previous results lead us to suspect a positive relationship between temperature and the likelihood of a fire. However, we need to investigate all the other variables as well, which is why we plot a correlation matrix of the features in the dataset.

corr_mat <- round(cor(df_scaled),2)
p_mat <- cor_pmat(df_scaled)
 
corr_mat <- ggcorrplot(
  corr_mat, 
  hc.order = FALSE, 
  type = "upper",
  outline.col = "white",
)
 
ggplotly(corr_mat)

Feature Selection

We performed feature selection using the caret package to determine which features are the most and least important.

In this case, we opted for Linear Discriminant Analysis with Stepwise Feature Selection by specifying stepLDA as our method.

The varImp function returns a measure of importance out of 100 for each feature. According to the official caret documentation, the importance metric is computed by conducting a ROC curve analysis on each predictor: a series of cutoffs is applied to the predictor values to predict the class, the AUC is computed, and it is used as the measure of variable importance.
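The per-predictor ROC idea can be sketched in base R (a toy illustration, not caret's implementation): for a single numeric predictor, the area under the ROC curve equals the Mann-Whitney statistic, i.e. the probability that a randomly chosen positive case scores higher than a randomly chosen negative case.

```r
# AUC of a single predictor x against 0/1 labels y, via the rank
# (Mann-Whitney) formulation: an informative predictor scores near 1,
# an uninformative one near 0.5.
auc_importance <- function(x, y) {
  r <- rank(x)
  n1 <- sum(y == 1)
  n0 <- sum(y == 0)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

set.seed(1)
y <- rep(c(0, 1), each = 50)
strong <- rnorm(100, mean = y)   # shifts with the class: informative
noise  <- rnorm(100)             # unrelated to the class: uninformative
c(strong = auc_importance(strong, y), noise = auc_importance(noise, y))
```

caret's ROC-based importance is more elaborate (it symmetrizes and rescales the AUC), but this is the underlying quantity.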

# prepare training scheme
df_scaled$Classes = as.factor(df_scaled$Classes)

control <- trainControl(method="repeatedcv", number=10, repeats=3)
# train the model
modelLDA <- train(Classes~., data=df_scaled, method="stepLDA", trControl=control)
 `stepwise classification', using 10-fold cross-validated correctness rate of method lda'.
218 observations of 13 variables in 2 classes; direction: both
stop criterion: improvement less than 5%.
correctness rate: 0.90779;  in: "ISI";  variables (1): ISI 
correctness rate: 0.97251;  in: "FFMC";  variables (2): ISI, FFMC 

 hr.elapsed min.elapsed sec.elapsed 
      0.000       0.000       0.599 
[... the same stepwise selection output is repeated for each of the remaining cross-validation resamples; every resample selects ISI first, and nearly all then add FFMC, with correctness rates between roughly 0.89 and 0.98 ...]
 `stepwise classification', using 10-fold cross-validated correctness rate of method lda'.
243 observations of 13 variables in 2 classes; direction: both
stop criterion: improvement less than 5%.
correctness rate: 0.9005;  in: "ISI";  variables (1): ISI 
correctness rate: 0.967;  in: "FFMC";  variables (2): ISI, FFMC 

 hr.elapsed min.elapsed sec.elapsed 
      0.000       0.000       0.588 
importanceLDA <- varImp(modelLDA, scale=FALSE)

plot(importanceLDA)

We can see that the variables month, Ws, Region, and day are insignificant compared to other features. We will disregard them in our model.

Model Building

For the following models, we will only use the features that were the most significant in our feature selection phase. The selected features are:

  1. Temperature
  2. Rain
  3. FFMC
  4. DMC
  5. DC
  6. ISI
  7. BUI
  8. FWI
  9. RH

Splitting the dataset

We begin by splitting the data into train/test sets with an 80/20 split, chosen as a common default. This leaves us with 191 observations in the training set and 52 in the test set. Given the small size of the dataset, we will later apply cross-validation to further examine the performance of our models and compare them with each other.


set.seed(6)
split <- sample.split(df_scaled, SplitRatio=0.8)

train_set <- subset(df_scaled, split == "TRUE")
test_set <- subset(df_scaled, split=="FALSE")

dim(train_set)
[1] 191  14
dim(test_set)
[1] 52 14

Logistic Regression

Logistic regression can be considered an extension of linear regression in which we predict a qualitative response for an observation. It gives us the probability that an observation belongs to a given class in binary classification, and can also be extended to multiclass problems.
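Concretely, with predictors X1, ..., Xp, the model expresses the probability of the positive class through the logistic function, which is equivalent to a linear model for the log-odds:

```latex
p(X) = \frac{e^{\beta_0 + \beta_1 X_1 + \dots + \beta_p X_p}}
            {1 + e^{\beta_0 + \beta_1 X_1 + \dots + \beta_p X_p}},
\qquad
\log\frac{p(X)}{1 - p(X)} = \beta_0 + \beta_1 X_1 + \dots + \beta_p X_p
```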

Training the model

We first start by fitting our model on the training set.

logistic_model <- glm(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, family="binomial")
Warning: glm.fit: algorithm did not convergeWarning: glm.fit: fitted probabilities numerically 0 or 1 occurred
logistic_model

Call:  glm(formula = Classes ~ Temperature + Rain + FFMC + DMC + DC + 
    ISI + BUI + FWI + RH, family = "binomial", data = train_set)

Coefficients:
(Intercept)  Temperature         Rain         FFMC  
     215.37       -45.81        47.30        90.66  
        DMC           DC          ISI          BUI  
     -62.72        71.07       342.22       -34.52  
        FWI           RH  
     152.95       -15.04  

Degrees of Freedom: 190 Total (i.e. Null);  181 Residual
Null Deviance:      264.4 
Residual Deviance: 1.455e-07    AIC: 20

[Interpretation on the coefficients]

Note the glm.fit warnings above: the two classes are (almost) perfectly separable with these features, so the algorithm failed to converge and the fitted coefficients are unstable; they should not be interpreted directly.

Testing the model

Since logistic regression gives us the probability of each observation belonging to the positive class (1), we will use a 0.5 threshold to convert that probability into a 0/1 classification.

After getting our predictions, we will use the confusionMatrix function from the caret library, which computes a set of performance metrics including F1-score, recall and precision. Other metrics computed include sensitivity, specificity, prevalence, etc. The official documentation for this function and the formulas for all metrics are found in this link. We will only be interested in the F1-score, recall, precision, accuracy and balanced accuracy.
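These metrics can also be computed by hand from a 2x2 confusion matrix. As a check, the sketch below (a hand-rolled helper, not caret code) reproduces the test-set numbers reported later, where TP = 36, FP = 0, FN = 1, TN = 15:

```r
# Headline metrics from a 2x2 confusion matrix (rows = predicted,
# columns = reference), using the standard definitions caret follows.
metrics_from_cm <- function(tp, fp, fn, tn) {
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)            # a.k.a. sensitivity
  f1        <- 2 * precision * recall / (precision + recall)
  accuracy  <- (tp + tn) / (tp + fp + fn + tn)
  balanced  <- (recall + tn / (tn + fp)) / 2   # mean of sensitivity and specificity
  c(precision = precision, recall = recall, f1 = f1,
    accuracy = accuracy, balanced_accuracy = balanced)
}

round(metrics_from_cm(tp = 36, fp = 0, fn = 1, tn = 15), 4)
```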

On the train set
preds_logistic <- predict(logistic_model, train_set, type="response")

preds_logistic <- ifelse(preds_logistic >0.5,1,0)
preds_logistic <- as.factor(preds_logistic)

confusionMatrix(preds_logistic, train_set$Classes,
                mode = "everything",
                positive="1")
Confusion Matrix and Statistics

          Reference
Prediction   0   1
         0  91   0
         1   0 100
                                     
               Accuracy : 1          
                 95% CI : (0.9809, 1)
    No Information Rate : 0.5236     
    P-Value [Acc > NIR] : < 2.2e-16  
                                     
                  Kappa : 1          
                                     
 Mcnemar's Test P-Value : NA         
                                     
            Sensitivity : 1.0000     
            Specificity : 1.0000     
         Pos Pred Value : 1.0000     
         Neg Pred Value : 1.0000     
              Precision : 1.0000     
                 Recall : 1.0000     
                     F1 : 1.0000     
             Prevalence : 0.5236     
         Detection Rate : 0.5236     
   Detection Prevalence : 0.5236     
      Balanced Accuracy : 1.0000     
                                     
       'Positive' Class : 1          
                                     
On the test set
preds_logistic <- predict(logistic_model, test_set, type="response")

preds_logistic <- ifelse(preds_logistic >0.5,1,0)
preds_logistic <- as.factor(preds_logistic)

confusionMatrix(preds_logistic, test_set$Classes,
                mode = "everything",
                positive="1")
Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 15  1
         1  0 36
                                          
               Accuracy : 0.9808          
                 95% CI : (0.8974, 0.9995)
    No Information Rate : 0.7115          
    P-Value [Acc > NIR] : 4.553e-07       
                                          
                  Kappa : 0.9541          
                                          
 Mcnemar's Test P-Value : 1               
                                          
            Sensitivity : 0.9730          
            Specificity : 1.0000          
         Pos Pred Value : 1.0000          
         Neg Pred Value : 0.9375          
              Precision : 1.0000          
                 Recall : 0.9730          
                     F1 : 0.9863          
             Prevalence : 0.7115          
         Detection Rate : 0.6923          
   Detection Prevalence : 0.6923          
      Balanced Accuracy : 0.9865          
                                          
       'Positive' Class : 1               
                                          

Plotting the ROC curve

As we plot the ROC curve, we can see that the AUC is equal to 0.986, which is close to a perfect classifier. Given the small size of the dataset at hand, we cannot draw any conclusions until we have tried the other statistical learning methods. Note that the curve below is computed from the thresholded 0/1 predictions rather than the raw probabilities, so it reflects a single operating point.

ROCPred <- prediction(as.numeric(preds_logistic), test_set$Classes)

ROCPer <- performance(ROCPred, measure="tpr",x.measure="fpr")
auc <- performance(ROCPred, measure = "auc")
auc <- auc@y.values[[1]]
auc
[1] 0.9864865
plot(ROCPer, colorize = TRUE)

Observations

Our statistics show the following:

  1. On the training set: accuracy is 100% and the F1 score is 100%, with no false negatives and no false positives.
  2. On the test set: accuracy is 98% and the F1 score is 98.6%, with one false negative and no false positives.

LDA

Linear Discriminant Analysis is best used when the decision boundary of our given dataset is assumed to be linear. LDA relies on two basic assumptions:

  1. All response classes share a common covariance
  2. The observations in each response class are normally distributed with a class-specific mean and that common covariance

Because LDA assumes a shared covariance structure, and to put all inputs on a comparable scale, we use the standardized data frame in the train/test splits. Each variable in the standardized data frame has a mean of 0 and a variance of 1.
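Under the assumptions above, with class priors pi_k, class means mu_k and shared covariance Sigma, LDA assigns an observation x to the class with the largest linear discriminant score (the standard form of the rule):

```latex
\delta_k(x) = x^{\top} \Sigma^{-1} \mu_k
            - \tfrac{1}{2}\, \mu_k^{\top} \Sigma^{-1} \mu_k
            + \log \pi_k
```

The boundary between any two classes is the set where their scores are equal, which is linear in x.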

Training the model

lda_model = lda(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, family="binomial")
lda_model
Call:
lda(Classes ~ Temperature + Rain + FFMC + DMC + DC + ISI + BUI + 
    FWI + RH, data = train_set, family = "binomial")

Prior probabilities of groups:
        0         1 
0.4764398 0.5235602 

Group means:
  Temperature       Rain       FFMC        DMC         DC
0  -0.5780859  0.4348542 -0.8150123 -0.6690568 -0.6056867
1   0.4762177 -0.3204676  0.6659827  0.6215861  0.5279307
         ISI        BUI        FWI         RH
0 -0.8222968 -0.6779796 -0.8138050  0.4522544
1  0.6175900  0.6185834  0.6580962 -0.3615521

Coefficients of linear discriminants:
                   LD1
Temperature  0.1021470
Rain         0.1406894
FFMC         1.1880939
DMC         -1.2053965
DC          -0.2464361
ISI          0.5731491
BUI          1.2481795
FWI          0.6869747
RH           0.5112902

[Interpretation on the coefficients]

Testing the model

On the train set

On our training data, the model reached an accuracy of 94.2% and an F1 score of 94.4%.

preds_lda <- predict(lda_model, train_set) # class predictions on the training set (this call was omitted above)
confusionMatrix(preds_lda$class, train_set$Classes,
                mode = "everything",
                positive="1")
Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 87  7
         1  4 93
                                          
               Accuracy : 0.9424          
                 95% CI : (0.8993, 0.9709)
    No Information Rate : 0.5236          
    P-Value [Acc > NIR] : <2e-16          
                                          
                  Kappa : 0.8847          
                                          
 Mcnemar's Test P-Value : 0.5465          
                                          
            Sensitivity : 0.9300          
            Specificity : 0.9560          
         Pos Pred Value : 0.9588          
         Neg Pred Value : 0.9255          
              Precision : 0.9588          
                 Recall : 0.9300          
                     F1 : 0.9442          
             Prevalence : 0.5236          
         Detection Rate : 0.4869          
   Detection Prevalence : 0.5079          
      Balanced Accuracy : 0.9430          
                                          
       'Positive' Class : 1               
                                          
On the test set

As we can see below, the number of false positives is 0 and the number of false negatives is 1. The results are very good, although the reverse would have been preferable: we do not want to miss any positives, i.e. we want to predict all fires. Our model yielded an F1 score of 98.6% and an accuracy of 98%.
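
These figures can be reproduced directly from the confusion counts. A base R sketch, assuming the test set has the same composition as in the QDA section below (37 fires, 15 non-fires), so TP = 36 and TN = 15:

```r
# Metrics from the stated confusion counts (TP/TN inferred from the text)
TP <- 36; FN <- 1; FP <- 0; TN <- 15

accuracy  <- (TP + TN) / (TP + TN + FP + FN)
precision <- TP / (TP + FP)
recall    <- TP / (TP + FN)          # also called sensitivity
f1        <- 2 * precision * recall / (precision + recall)

round(c(accuracy = accuracy, f1 = f1), 3)  # accuracy 0.981, f1 0.986
```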

Plotting the ROC curve

The AUC for LDA was also 0.986, similar to the one for Logistic Regression.

auc
[1] 0.9864865

[INTERPRETATION]

QDA

Quadratic Discriminant Analysis is best used when the decision boundary of our given dataset is assumed to be non-linear. Similarly to LDA, QDA makes two basic assumptions:

  1. There is a different covariance for each of the response classes
  2. The distribution of observations in each response class is normal with a class-specific mean, and a class-specific covariance

Training the model

qda_model
Call:
qda(Classes ~ Temperature + Rain + FFMC + DMC + DC + ISI + BUI + 
    FWI + RH, data = train_set)

Prior probabilities of groups:
        0         1 
0.4764398 0.5235602 

Group means:
  Temperature       Rain       FFMC        DMC         DC
0  -0.5780859  0.4348542 -0.8150123 -0.6690568 -0.6056867
1   0.4762177 -0.3204676  0.6659827  0.6215861  0.5279307
         ISI        BUI        FWI         RH
0 -0.8222968 -0.6779796 -0.8138050  0.4522544
1  0.6175900  0.6185834  0.6580962 -0.3615521

[Interpretation on the coefficients]

Testing the model

On the train set

Our model yields an accuracy of 97.9% and an F1 score of 98% on the training set.

confusionMatrix(preds_qda$class, train_set$Classes,
                mode = "everything",
                positive="1")
Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 88  1
         1  3 99
                                          
               Accuracy : 0.9791          
                 95% CI : (0.9472, 0.9943)
    No Information Rate : 0.5236          
    P-Value [Acc > NIR] : <2e-16          
                                          
                  Kappa : 0.958           
                                          
 Mcnemar's Test P-Value : 0.6171          
                                          
            Sensitivity : 0.9900          
            Specificity : 0.9670          
         Pos Pred Value : 0.9706          
         Neg Pred Value : 0.9888          
              Precision : 0.9706          
                 Recall : 0.9900          
                     F1 : 0.9802          
             Prevalence : 0.5236          
         Detection Rate : 0.5183          
   Detection Prevalence : 0.5340          
      Balanced Accuracy : 0.9785          
                                          
       'Positive' Class : 1               
                                          
On the test set

As we can see below, the number of false positives is 1 and the number of false negatives is 1. The results are very good, though we would prefer no false negatives at all, since we do not want to miss any fires. Our model yielded an F1 score of 97.3% and an accuracy of 96%.

confusionMatrix(preds_qda$class, test_set$Classes,
                mode = "everything",
                positive="1")
Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 14  1
         1  1 36
                                          
               Accuracy : 0.9615          
                 95% CI : (0.8679, 0.9953)
    No Information Rate : 0.7115          
    P-Value [Acc > NIR] : 4.949e-06       
                                          
                  Kappa : 0.9063          
                                          
 Mcnemar's Test P-Value : 1               
                                          
            Sensitivity : 0.9730          
            Specificity : 0.9333          
         Pos Pred Value : 0.9730          
         Neg Pred Value : 0.9333          
              Precision : 0.9730          
                 Recall : 0.9730          
                     F1 : 0.9730          
             Prevalence : 0.7115          
         Detection Rate : 0.6923          
   Detection Prevalence : 0.7115          
      Balanced Accuracy : 0.9532          
                                          
       'Positive' Class : 1               
                                          

Plotting the ROC curve

After plotting the ROC curve we get an AUC of 0.953, slightly lower than that of Logistic Regression and LDA.

auc
[1] 0.9531532

KNN Classifier

In this section, we will explore KNN’s performance on our problem. We will use hyperparameter tuning to determine the best number of nearest neighbors (K), and we will also use repeated cross-validation during training for a better estimate of performance.

Since KNN is a distance-based model, we will here again use our normalized dataset instead of the original one. Once again, the scaling technique used was standard scaling via the scale() function.

Training the model

Setting up the Cross-Validation for Hyperparameter tuning

The summaryFunction argument determines which metric is used to assess the performance of a particular hyperparameter setting. Here we use defaultSummary, which computes accuracy and the kappa statistic.

We have opted for 10-fold cross-validation repeated 10 times. The classProbs parameter is set to TRUE so that we can choose the threshold later when we test the model’s performance.

training_control <- trainControl(method = "repeatedcv",
                                 summaryFunction = defaultSummary,
                                 classProbs = TRUE,
                                 number = 10,
                                 repeats = 10)
Training with Cross-validation

Now we use the train() function to perform model training and tuning of the k hyperparameter. The range of k runs from 3 to 85 in steps of 2, so we only consider odd values of k, which is good practice for KNN classification since it avoids ties in the majority vote.

Another tweak we need to make to our dataset is to convert the target variable’s values into valid R variable names so that the KNN algorithm can work with class probabilities, since each class value becomes a column of probabilities. Leaving the values as {0,1} would throw an error, so we map the Classes values to ‘fire’ and ‘not_fire’ and proceed.

train_set$Classes <- mapvalues(train_set$Classes, from=c(0,1), to=c("not_fire","fire"))
test_set$Classes <- mapvalues(test_set$Classes, from=c(0,1), to=c("not_fire","fire"))
knn_cv <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, 
                data = train_set,
                method = "knn",
                trControl = training_control,
                metric = "Accuracy",
                tuneGrid = data.frame(k = seq(3, 85, by = 2)))
knn_cv
Distribution of predicted probabilities (threshold inspection)

Inspecting the probabilities reveals that a cutoff probability of around 0.5 gives good classification results.

preds = predict(knn_cv,train_set, type = "prob")
train_set %>%
  ggplot() +
  aes(x = preds$fire, fill = Classes) +
  geom_histogram(bins = 20) +
  labs(x = "Probability", y = "Count", title = "Distribution of predicted probabilities for value fire" )
Plotting the training ROC curve
pROC_train <- roc(response = train_set$Classes, predictor = preds[,"fire"],
                quiet = TRUE,
                plot = TRUE, 
                percent = TRUE,
                auc.polygon = TRUE, 
                print.auc = TRUE, 
                print.thres = TRUE,
                print.thres.best.method = "youden")
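
The print.thres.best.method = "youden" option above picks the cutoff that maximizes Youden’s J statistic, J = sensitivity + specificity − 1. A minimal base R illustration with made-up sensitivity/specificity pairs (for illustration only, not values from our model):

```r
# Hypothetical (sensitivity, specificity) pairs at four candidate cutoffs
sens    <- c(1.00, 0.98, 0.93, 0.80)
spec    <- c(0.60, 0.85, 0.95, 0.99)
cutoffs <- c(0.2, 0.4, 0.5, 0.7)

J <- sens + spec - 1           # Youden's J at each cutoff
cutoffs[which.max(J)]          # 0.5 for these toy numbers
```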

Testing the model

preds = predict(knn_cv,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive="fire")

Plotting the ROC curve on the test results

preds <- as.vector(preds, mode = "numeric")
test_set$Classes <- as.vector(test_set$Classes, mode = "numeric")

pred <- prediction(preds, test_set$Classes) 

perf <- performance(pred,"tpr","fpr")
plot(perf,colorize=TRUE)

Ensemble Methods / Tree based methods

Simple Decision Trees and tree pruning

The goal of ensemble modeling is to improve performance over a baseline model by combining multiple models. So, we will set the baseline performance measure by starting with one algorithm. In our case, we will build a simple decision tree.

Decision trees are widely used classifiers in industry thanks to their transparency in describing the rules that lead to a prediction. They are arranged in a hierarchical tree-like structure and are simple to understand and interpret. They are not overly sensitive to outliers and are able to capture nonlinear relationships.

We will be using the rpart library for creating decision trees. rpart stands for recursive partitioning and employs the CART (classification and regression trees) algorithm. Apart from rpart, there are many other decision tree libraries, such as C50, party, tree, and maptree.

Training the model with Pruning

Next, we create a decision tree model by calling the rpart function. Let’s first create a base model with default parameters and value.

Pre-pruning is also known as early stopping. As the name suggests, the criteria are set as parameter values when building the rpart model. Below are some of the pre-pruning criteria that can be used. The tree stops growing when it meets any of these criteria, or when it reaches pure classes.

The complexity parameter (cp) in rpart is the minimum improvement in the model needed at each node. It’s based on the cost complexity of the model and works as follows:

  • For the given tree, add up the misclassifications at every terminal node.
  • Then multiply the number of splits by a penalty term (lambda) and add it to the total misclassification.
  • The lambda is determined through cross-validation and is not reported in R.
  • The cp we see using printcp() is lambda scaled by the misclassification rate of the root, i.e. of the overall data.

The cp value is a stopping parameter: it speeds up the search for splits, because splits that do not meet this criterion can be pruned away before the tree grows too far.
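
In practice, cp is often chosen after the fact from the cross-validated error table that printcp() reports, using the one-standard-error rule: take the smallest tree whose xerror is within one standard error of the minimum. A base R sketch over a hypothetical cp table (the numbers are illustrative, not from our model):

```r
# Hypothetical table in the shape of rpart's cptable (CP, xerror, xstd)
cptable <- data.frame(
  CP     = c(0.50, 0.10, 0.02, 0.00),
  xerror = c(1.00, 0.40, 0.25, 0.27),
  xstd   = c(0.08, 0.06, 0.05, 0.05)
)

best   <- which.min(cptable$xerror)                  # row with lowest CV error
cutoff <- cptable$xerror[best] + cptable$xstd[best]  # 1-SE threshold
chosen <- cptable$CP[which(cptable$xerror <= cutoff)[1]]
chosen                                               # 0.02 for this toy table
```

The chosen value would then be handed to prune(), e.g. prune(base_model, cp = chosen).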

Other parameters include but are not limited to:

  • maxdepth: sets the maximum depth of the tree.

  • minsplit: the minimum number of records that must exist in a node for a split to be attempted.

One last thing: since we are in a classification setting, we have to specify ‘class’ as the method for building our tree, instead of the ‘anova’ method used in regression settings.

base_model <- rpart(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data = train_set, method = "class",  control = rpart.control(cp = 0, maxdepth = 8, minsplit = 8))
summary(base_model)
#Plot Decision Tree
rpart.plot(base_model)

The summary of our base model gives the details of each split: the number of observations, the value of the complexity parameter, the predicted class, the class counts with their probabilities, and the children of the node. It also details the upcoming splits, starting with the primary splits and their percent improvement in the prediction, followed by the surrogate splits.

The resulting tree, as explained in the section above, is the smallest tree with the lowest misclassification loss. It is plotted with the split details and the leaf-node classes.

The optimal CP value found was 0.96341463.

preds = predict(base_model,train_set, type="class")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')

The training accuracy of our tree is 99%, with an F1 score of 99% as well, for a total of 2 misclassifications: 1 FP and 1 FN.

Testing the model

We get 100% accuracy on our held-out validation set, which suggests we successfully avoided overfitting through tree pruning.

preds = predict(base_model,test_set, type="class")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')
ROC curve
preds <- as.vector(preds, mode = "numeric")
test_set$Classes <- as.vector(test_set$Classes, mode = "numeric")

pred <- prediction(preds, test_set$Classes) 

perf <- performance(pred,"tpr","fpr")
plot(perf,colorize=TRUE)

BAGGING

Bagging, or bootstrap aggregation, is an ensemble method that involves training the same algorithm many times by using different subsets sampled from the training data. The final output prediction is then averaged across the predictions of all the sub-models. The two most popular bagging ensemble techniques are Bagged Decision Trees and Random Forest.
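
For classification, the “averaging” step is a majority vote across the bootstrap models. A minimal base R sketch with three hypothetical models voting on five observations:

```r
# Each row: one bootstrap model's 0/1 predictions on five observations
votes <- rbind(
  model1 = c(1, 0, 1, 1, 0),
  model2 = c(1, 1, 1, 0, 0),
  model3 = c(0, 1, 1, 1, 0)
)

# Majority vote: predict 1 when more than half the models say 1
bagged <- as.integer(colMeans(votes) > 0.5)
bagged   # 1 1 1 1 0
```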

Bagged Decision Trees

This method performs best with algorithms that have high variance. The argument method=“treebag” specifies the algorithm. We will train our model using 5-fold cross-validation repeated 5 times.

Training the model
control <- trainControl(method="repeatedcv", number=5, repeats=5)

bagCART_model <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, method="treebag", metric="Accuracy", trControl=control)
Training accuracy
preds = predict(bagCART_model,train_set, type="raw")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
Testing accuracy
preds = predict(bagCART_model,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')
Random Forests

Random Forest is an extension of bagged decision trees where, in addition to sampling the observations, we also sample the variables considered in each tree. The trees are constructed with the objective of reducing the correlation between individual trees: by not always using the same strong predictors, we avoid the strongly correlated trees that plain bagging can produce.
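
The number of variables sampled at each split is the mtry parameter, which is what caret tunes for method = “rf”. For classification, the conventional default is the square root of the number of predictors, so with our 9 predictors each split considers only 3 of them:

```r
p <- 9                          # number of predictors in our model formula
mtry_default <- floor(sqrt(p))  # conventional classification default
mtry_default                    # 3
```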

Training the model
control <- trainControl(method="repeatedcv", number=5, repeats=5)

rf_model <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, method="rf", metric="Accuracy", trControl=control)
Training accuracy
preds = predict(rf_model,train_set, type="raw")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
Testing accuracy
preds = predict(rf_model,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')

BOOSTING

In boosting, multiple models are trained sequentially and each model learns from the errors of its predecessors. We will use the Stochastic Gradient Boosting algorithm.

Stochastic Gradient Boosting
Training the model
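
The training call is not reproduced in this section; below is a plausible caret sketch consistent with the predict calls that follow (the object name SGB matches them, and we assume the same repeated cross-validation control as the other ensembles — the actual tuning grid used is unknown):

```r
control <- trainControl(method = "repeatedcv", number = 5, repeats = 5)

# Stochastic Gradient Boosting via the gbm package (caret method "gbm")
SGB <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH,
             data = train_set,
             method = "gbm",
             metric = "Accuracy",
             trControl = control,
             verbose = FALSE)
```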
Training accuracy
preds = predict(SGB,train_set, type="raw")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
Testing accuracy
preds = predict(SGB,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')

SVM

A Support Vector Machine is a discriminative classifier that separates observations using the hyperplane that best differentiates between the classes. Its advantages lie in its flexibility and in working well on high-dimensional data.

We will use SVM on our dataset to demonstrate its capabilities.

Training the model

The goal of the SVM is to identify the boundary that maximizes the margin, i.e. the distance between the hyperplane and the closest points of each class.

There are two hyperparameters to take into consideration before training our SVM model. First, the cost C, which acts as a regularization parameter and trades off correct classification of the training examples against maximizing the margin of the decision boundary; in other words, the smaller the value of C, the more misclassifications are tolerated on the training set (and the wider the margin). The second hyperparameter, gamma, controls how much curvature we allow in the decision boundary.

We start by tuning our model over different values of gamma and C, beginning with a linear kernel.

To use the cross-validation functions from the caret package, we need to turn the 0/1 categorical values of the variable Classes into “fire”/“not_fire” (as required). The functions provided will allow us to find the best values for both gamma and C. We used a tuning length of 10.

unique(train_set$Classes)
[1] not_fire fire    
Levels: not_fire fire

We perform 10-fold cross-validation repeated 5 times.
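
The tuning calls themselves are not shown; below is a plausible caret sketch consistent with the output that follows (object names and the exact control settings are assumptions; the ROC metric matches the Sens/Spec summary reported):

```r
control <- trainControl(method = "repeatedcv", number = 10, repeats = 5,
                        classProbs = TRUE, summaryFunction = twoClassSummary)

svm_model_linear <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH,
                          data = train_set, method = "svmLinear",
                          metric = "ROC", trControl = control)

svm_model_radial <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH,
                          data = train_set, method = "svmRadial",
                          metric = "ROC", trControl = control, tuneLength = 10)
```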

svm_model_linear
Support Vector Machines with Linear Kernel 

191 samples
  9 predictor
  2 classes: 'not_fire', 'fire' 

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 5 times) 
Summary of sample sizes: 172, 172, 171, 172, 172, 172, ... 
Resampling results:

  ROC        Sens  Spec 
  0.9982444  0.98  0.952

Tuning parameter 'C' was held constant at a value of 1

As we can see above, the best values were found to be sigma = 0.2510562 and C = 16 for the radial kernel, and C = 1 for the linear kernel.

Using the radial kernel resulted in an AUC of 99% while the linear gave us an AUC of 99.8%.

NOTE: Even though the seed is set at 6, the values for C and sigma might change when reproducing the experiment, due to the dataset being small and relatively non-complex. Still, the values for the AUC and accuracy will remain very high.

We will then proceed to show the confusion matrix of the model.

confusionMatrix(preds_svm_train_linear,train_set$Classes, positive="fire")
Confusion Matrix and Statistics

          Reference
Prediction not_fire fire
  not_fire       91    2
  fire            0   98
                                          
               Accuracy : 0.9895          
                 95% CI : (0.9627, 0.9987)
    No Information Rate : 0.5236          
    P-Value [Acc > NIR] : <2e-16          
                                          
                  Kappa : 0.979           
                                          
 Mcnemar's Test P-Value : 0.4795          
                                          
            Sensitivity : 0.9800          
            Specificity : 1.0000          
         Pos Pred Value : 1.0000          
         Neg Pred Value : 0.9785          
             Prevalence : 0.5236          
         Detection Rate : 0.5131          
   Detection Prevalence : 0.5131          
      Balanced Accuracy : 0.9900          
                                          
       'Positive' Class : fire            
                                          

For the radial kernel: our model gave us an accuracy of 100% on the training set and 96.15% on the test set.

For the linear kernel: the model gave an accuracy of 98.95% on the training set, worse than the radial kernel, which is expected since it is less flexible and fits the training data less closely. On the test set, however, it achieved the same accuracy.

Overall, since our focus is to correctly classify fires, we will go with the radial kernel, since it produced fewer false negatives than the linear one.

Plotting the ROC curve

We get an AUC value of 0.953 using the SVM classifier.

---
title: <center>Algerian Forest Fire Analysis</center><br/>
output: html_notebook
author:
  - Mohamed Rissal Hedna 201906233
  - Younes Djemmal 201906303
---
## Overview
### Background

During the summer of 2012, [wild fires](https://www.un-spider.org/news-and-events/news/algeria-maps-summer-2012-wildfires-available) ravaged the Algerian territory, covering most of the northern part and especially the coastal cities. The disaster was driven by higher-than-average temperatures, which reached as high as 50 degrees Celsius.

### Objectives 

One important measure against the recurrence of such disasters is the ability to predict their occurrence. In this project, we will attempt to predict these forest fires from multiple features related to weather and fire-weather indices.

### Dataset Description

The dataset we will use to train and test our models consists of 244 observations from two Algerian regions (Wilayas): Sidi-Bel Abbes and Bejaia. The observations were gathered over 4 months, from June to September 2012, for both regions.

**The Dataset contains the following variables:**

1. Date: (DD/MM/YYYY) Day, month ('june' to 'september'), year (2012)
2. Temp: maximum temperature at noon, in degrees Celsius: 22 to 42
3. RH: Relative Humidity in %: 21 to 90
4. Ws: Wind speed in km/h: 6 to 29
5. Rain: total rain for the day, in mm: 0 to 16.8<br>
**FWI Components (check this [LINK](https://cwfis.cfs.nrcan.gc.ca/background/summary/fwi) for more information)**
6. Fine Fuel Moisture Code (FFMC) index from the FWI system: 28.6 to 92.5
7. Duff Moisture Code (DMC) index from the FWI system: 1.1 to 65.9
8. Drought Code (DC) index from the FWI system: 7 to 220.4
9. Initial Spread Index (ISI) index from the FWI system: 0 to 18.5
10. Build-up Index (BUI) index from the FWI system: 1.1 to 68
11. Fire Weather Index (FWI) Index: 0 to 31.1
12. Classes: two classes, namely "fire" and "not fire"

## Exploratory Data Analysis

We first start off by importing the necessary libraries for our analysis.

[INSERT DESCRIPTION OF EACH LIBRARY]

```{r, include=FALSE}
library(GA)
library(e1071)
library(MASS)
library(dplyr)
library(vtable)
library(plyr)
library(ggplot2)
library(ggcorrplot)
library(plotly)
library(tidyverse)
#Feature selection libraries
library(mlbench)
library(caret)
#For Logistic regression
library(caTools)
#For ROC curve
library(ROCR)
```


### Importing the data 

The Dataset provided to us was in the form of a .csv file that contained two tables, one table for the observations belonging to the Sidi-Bel Abbes region, and the other for Bejaia. 

Before starting our analysis we separated the tables into two distinct files according to the region. We named both files *Algerian_forest_fires_dataset_Bejaia.csv* and *Algerian_forest_fires_dataset_Sidi_Bel_Abbes.csv* for Bejaia and Sidi-Bel Abbes respectively.


```{r, echo=FALSE}
df_b <- read.csv("./Algerian_forest_fires_dataset_Bejaia.csv")
df_s <- read.csv("./Algerian_forest_fires_dataset_Sidi_Bel_Abbes.csv")
```

### Cleaning and processing the data

We first check for null values in the dataset; none were found.


```{r}
colSums(is.na(df_b))
colSums(is.na(df_s))
```

We then proceed to add a column in both datasets to indicate the region (Wilaya) of each table. We chose the following encoding:

1. Bejaia = 0
2. Sidi-Bel Abbes = 1


```{r}
df_b[["Region"]] = 0
df_s[["Region"]] = 1
```

After that, we proceed to merge both our datasets into one single dataframe using *full_join()*; this will allow us to easily explore and analyze the data.

```{r}
df_s$DC <- as.double(df_s$DC)
df_s$FWI <- as.double(df_s$FWI)

df = full_join(df_s, df_b)

dim(df)
str(df)
```

```{r}
summary(df)
unique(df$year)
unique(df$month)
```

We check again for any *NA* values that might have been introduced by merging the data from both tables; we found one row containing NA values in DC and FWI. We delete that row, since this will not meaningfully affect our overall dataset.

```{r}
colSums(is.na(df))
df = df %>% drop_na(DC)
dim(df)
```

We now display the ranges of values that some categorical variables contain, mainly the Classes and Region columns.


```{r}
unique(df$Classes)
unique(df$Region)
```

We find that the Classes column contains values with unneeded space characters, so we proceed to trim those spaces.

```{r}
df$Classes <- trimws(df$Classes, which = c("both"))
```



```{r}
unique(df$Classes)
df = df %>% drop_na(Classes)
```

```{r}
df$Classes <- mapvalues(df$Classes, from=c("not fire","fire"), to=c(0,1))
```

```{r}
unique(df$Classes)
df$Classes <- as.numeric(df$Classes)
st(df)
```

```{r}
df <- df[-c(3)]

df_scaled = df
df_scaled[-c(1,2,13,14)] <- scale(df[-c(1,2,13,14)])
st(df_scaled)

```

### Visualizing the data

We have ended up with a clean and scaled dataframe named *df_scaled*, which we will use to visualize and further explore our data.

Our first instinct is to compare the two regions in terms of number of fires and average temperature.

```{r}
aggregate(df$Classes ~ df$Region, FUN = sum)
aggregate(df$Temperature ~ df$Region, FUN = mean)
```
We used the unscaled dataset to plot the real-life temperature values.

```{r}
df %>%
  group_by(Region) %>%
  summarise(Region = Region, Number_of_fires = sum(Classes), Temperature = mean(Temperature)) %>%
  ggplot(aes(x=Region, y=Number_of_fires, fill = Temperature))+
  geom_col(position='dodge')
```

We can see that the Sidi-Bel Abbes region has, in total, a greater number of fires and a higher average temperature throughout the summer of 2012.

## Further Analysis
### Correlation Matrix

The previous results push us to suspect a positive relationship between the temperature and the likelihood of having a fire. However, we need to investigate all the other variables, which is why we will plot a correlation matrix of the features in the dataset.

```{r}
corr_mat <- round(cor(df_scaled),2)
p_mat <- cor_pmat(df_scaled)
 
corr_mat <- ggcorrplot(
  corr_mat, 
  hc.order = FALSE, 
  type = "upper",
  outline.col = "white",
)
 
ggplotly(corr_mat)
```

### Feature Selection

We performed feature selection using the Caret package to determine which features are the most important and which are the least. 

In this case, we opted for Linear Discriminant Analysis with Stepwise Feature Selection by specifying *stepLDA* as our method.

The *varImp* function returns a measure of importance out of 100 for each of the features. According to the official [Caret documentation](https://topepo.github.io/caret/variable-importance.html), the importance metric is calculated by conducting a ROC curve analysis on each predictor; a series of cutoffs is applied to the predictor data to predict the class. The AUC is then computed and is used as a measure of variable importance. 


```{r}
# prepare training scheme
df_scaled$Classes = as.factor(df_scaled$Classes)

control <- trainControl(method="repeatedcv", number=10, repeats=3)
# train the model
modelLDA <- train(Classes~., data=df_scaled, method="stepLDA", trControl=control)

importanceLDA <- varImp(modelLDA, scale=FALSE)

plot(importanceLDA)
```
We can see that the variables *month*, *Ws*, *Region*, and *day* are insignificant compared to other features. We will disregard them in our model.



## Model Building

For the following models, we will only use the features that were the most significant in our feature selection phase. The selected features are:

1. Temperature
2. Rain
3. FFMC
4. DMC
5. DC
6. ISI
7. BUI
8. FWI
9. RH

### Splitting the dataset

We begin by splitting the data into train/test sets with an 80/20 split, chosen as a sensible default. This leaves us with 191 observations in the training set and 52 in the test set. Given the small size of the dataset at hand, we will later apply cross-validation to further examine the performance of our models and compare them with each other.


```{r}

set.seed(6)
split <- sample.split(df_scaled, SplitRatio=0.8)

train_set <- subset(df_scaled, split == "TRUE")
test_set <- subset(df_scaled, split=="FALSE")

dim(train_set)
dim(test_set)

```
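
As an aside, caret’s createDataPartition() offers a stratified alternative that preserves the class ratio in both subsets; a sketch of what that would look like (not the split actually used in this report; the `_strat` object names are hypothetical):

```r
set.seed(6)
idx <- createDataPartition(df_scaled$Classes, p = 0.8, list = FALSE)
train_strat <- df_scaled[idx, ]   # ~80%, class ratio preserved
test_strat  <- df_scaled[-idx, ]  # ~20%
```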

### Logistic Regression

Logistic Regression can be considered an extension of Linear Regression in which we predict a qualitative response for an observation. It gives us the probability that an observation belongs to a class in binary classification, and it can be extended to multiclass problems.
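
Concretely, with our nine selected predictors, the model estimates the log-odds of the fire class as a linear function of the features:

$$
\log\frac{p(X)}{1 - p(X)} = \beta_0 + \beta_1 X_1 + \dots + \beta_9 X_9,
\qquad
p(X) = \frac{e^{\beta_0 + \beta^\top X}}{1 + e^{\beta_0 + \beta^\top X}}
$$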


#### Training the model

We first start by fitting our model on the training set. 

```{r}
logistic_model <- glm(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, family="binomial")

logistic_model
```

[Interpretation on the coefficients]

#### Testing the model

Since logistic regression gives us the probability of each observation belonging to class 1, we will use a 0.5 threshold to turn that probability into a classification of either 0 or 1.

After getting our predictions, we will use the confusionMatrix function from the caret library, which computes a set of performance metrics including F1 score, recall and precision. Other metrics computed include sensitivity, specificity, prevalence, etc. The official documentation for this function, with the formulas for all metrics, is found at this [link.](https://rdrr.io/cran/caret/man/confusionMatrix.html) We will mainly be interested in the F1 score, recall, precision, accuracy and balanced accuracy.

##### On the train set


```{r}
preds_logistic <- predict(logistic_model, train_set, type="response")

preds_logistic <- ifelse(preds_logistic >0.5,1,0)
preds_logistic <- as.factor(preds_logistic)

confusionMatrix(preds_logistic, train_set$Classes,
                mode = "everything",
                positive="1")
```


##### On the test set

```{r}
preds_logistic <- predict(logistic_model, test_set, type="response")

preds_logistic <- ifelse(preds_logistic >0.5,1,0)
preds_logistic <- as.factor(preds_logistic)

confusionMatrix(preds_logistic, test_set$Classes,
                mode = "everything",
                positive="1")
```

#### Plotting the ROC curve

As we plot the ROC curve, we can see that the AUC is equal to 0.986, indicating an almost perfect classifier. Due to the small size of our dataset, however, we cannot draw any conclusions until we have tried the other statistical learning methods.

```{r}
ROCPred <- prediction(as.numeric(preds_logistic), test_set$Classes)

ROCPer <- performance(ROCPred, measure="tpr",x.measure="fpr")
auc <- performance(ROCPred, measure = "auc")
auc <- auc@y.values[[1]]
auc
plot(ROCPer, colorize = TRUE)
```

#### Observations

Our statistics show the following:

1. On the training set: Accuracy is 100%, F1 score is 100%, with one False Negative and no False Positives.
2. On the testing set: Accuracy is 98%, F1 score is 98.6%, with one False Negative and no False Positives.

### LDA

Linear Discriminant Analysis is best used when the decision boundary of our given dataset is assumed to be linear. There are two basic assumptions that LDA takes into consideration:

1. There is a common variance across all response classes
2. The distribution of observations in each response class is normal with a **class-specific** mean, and a **common** variance


Since LDA assumes that each input variable has the same variance, we will use the standardized data frame for the train/test splits. Each variable in the standardized data frame has a mean of 0 and a variance of 1.
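
Under these assumptions, LDA assigns an observation $x$ to the class $k$ with the largest value of the discriminant function, which is linear in $x$ (hence the linear decision boundary):

$$
\delta_k(x) = x^\top \Sigma^{-1} \mu_k - \tfrac{1}{2}\mu_k^\top \Sigma^{-1} \mu_k + \log \pi_k
$$

where $\mu_k$ is the class mean, $\Sigma$ the common covariance matrix, and $\pi_k$ the prior probability of class $k$.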

#### Training the model


```{r}
lda_model = lda(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, family="binomial")
lda_model
```


[Interpretation on the coefficients]

#### Testing the model

##### On the train set

On our training data, the model reached an accuracy of 94.2% and an F1 score of 94.4%.

```{r}
preds_lda = predict(lda_model,train_set, type="response")
confusionMatrix(preds_lda$class, train_set$Classes,
                mode = "everything",
                positive="1")
```

##### On the test set

As we can see below, the number of false positives is 0 and the number of false negatives is 1. The results are very good, although the reverse would have been preferable: we do not want to miss any positives, i.e. we want to predict all fires. Our model yielded an F1 score of 98.6% and an accuracy of 98%.

```{r}
preds_lda = predict(lda_model,test_set, type="response")
confusionMatrix(preds_lda$class, test_set$Classes,
                mode = "everything",
                positive="1")
```
#### Plotting the ROC curve

The AUC for LDA was also 0.986, similar to the one for Logistic Regression.

```{r}
ROCPred <- prediction(as.numeric(preds_lda$class), test_set$Classes)
ROCPer <- performance(ROCPred, measure="tpr",x.measure="fpr")
auc <- performance(ROCPred, measure = "auc")
auc <- auc@y.values[[1]]
auc
plot(ROCPer, colorize = TRUE)
```

[INTERPRETATION]

### QDA

Quadratic Discriminant Analysis is best used when the decision boundary of our given dataset is assumed to be non-linear. Similarly to LDA, QDA makes two basic assumptions: 

1. There is a different covariance for each of the response classes
2. The distribution of observations in each response class is normal with a **class-specific** mean, and a **class-specific** covariance
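A quick way to see what "class-specific covariance" means in practice (sketched on the built-in iris data rather than on our dataset) is to compute one covariance matrix per response class:

```r
# One covariance matrix per class -- these are what QDA estimates,
# whereas LDA pools them into a single common covariance matrix.
by(iris[, 1:4], iris$Species, cov)
```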


#### Training the model


```{r}
qda_model = qda(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set)
qda_model
```

[Interpretation on the coefficients]

#### Testing the model

##### On the train set

Our model yields an accuracy of 97.9% and an F1 score of 98% on the training set.

```{r}
preds_qda = predict(qda_model,train_set, type="response")
confusionMatrix(preds_qda$class, train_set$Classes,
                mode = "everything",
                positive="1")
```

##### On the test set

As we can see below, the number of false positives is 1 and the number of false negatives is 1. The results are very good, although we would rather trade false negatives for false positives, since we do not want to miss any fires. Our model yielded an F1 score of 97.3% and an accuracy of 96%.

```{r}
preds_qda = predict(qda_model,test_set, type="response")
confusionMatrix(preds_qda$class, test_set$Classes,
                mode = "everything",
                positive="1")
```

#### Plotting the ROC curve

After plotting the ROC curve we got an AUC of 0.953, which is a bit worse than the one in Logistic Regression and LDA.

```{r}

ROCPred <- prediction(as.numeric(preds_qda$class), test_set$Classes)
ROCPer <- performance(ROCPred, measure="tpr",x.measure="fpr")
auc <- performance(ROCPred, measure = "auc")
auc <- auc@y.values[[1]]
auc
plot(ROCPer, colorize = TRUE)
```


### KNN Classifier
In this section, we will explore KNN's performance on our problem. We will use hyperparameter tuning to determine the best number of nearest neighbors (K), and we will also use repeated cross-validation during training for a better estimate of performance. 

Since KNN is a distance-based model, we will again use our normalized dataset instead of the original. Once again, the variables were standardized using the scale() function. 

#### Training the model

##### Setting up the Cross-Validation for Hyperparameter tuning

The summaryFunction argument determines which metric is used to evaluate each hyperparameter setting. Here we use defaultSummary, which computes accuracy and the kappa statistic. 

We have opted for 10-fold cross-validation repeated 10 times. The classProbs parameter is set to TRUE so that we can set the classification threshold later when we evaluate model performance.

```{r}
training_control <- trainControl(method = "repeatedcv",
                                 summaryFunction = defaultSummary,
                                 classProbs = TRUE,
                                 number = 10,
                                 repeats = 10)
```

##### Training with Cross-validation

Now we use the train() function to perform the model training/tuning of the k hyper-parameter.
The range of k is from 3 to 85 in steps of 2, so we only consider odd values of k, which is best practice for KNN classification as it avoids ties in the majority vote.

Another tweak we need to make to our data-set is to change the target variable values into valid R variable names, since the KNN algorithm works with class probabilities and each level of the target variable becomes a column name in the probability output. Leaving the values as {0,1} would throw an error, so we map the Classes values to 'fire' and 'not_fire' and proceed. 

```{r}
train_set$Classes <- mapvalues(train_set$Classes, from=c(0,1), to=c("not_fire","fire"))
test_set$Classes <- mapvalues(test_set$Classes, from=c(0,1), to=c("not_fire","fire"))

```

```{r}
knn_cv <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, 
                data = train_set,
                method = "knn",
                trControl = training_control,
                metric = "Accuracy",
                tuneGrid = data.frame(k = seq(3, 85, by = 2)))
knn_cv
```

##### Distribution of predicted probabilities (threshold inspection)

Inspecting the probabilities reveals that a cutoff probability around 0.5 gives good classification results.
```{r}
preds = predict(knn_cv,train_set, type = "prob")
train_set %>%
  ggplot() +
  aes(x = preds$fire, fill = Classes) +
  geom_histogram(bins = 20) +
  labs(x = "Probability", y = "Count", title = "Distribution of predicted probabilities for value fire" )
```

##### Plotting the training ROC curve

```{r}
pROC_train <- roc(response = train_set$Classes, predictor = preds[,"fire"],
                quiet = TRUE,
                plot = TRUE, 
                percent = TRUE,
                auc.polygon = TRUE, 
                print.auc = TRUE, 
                print.thres = TRUE,
                print.thres.best.method = "youden")
```

#### Testing the model

```{r}
preds = predict(knn_cv,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive="fire")
```

#### Plotting the ROC curve on the test results

```{r}
preds <- as.vector(preds, mode = "numeric")
test_set$Classes <- as.vector(test_set$Classes, mode = "numeric")

pred <- prediction(preds, test_set$Classes) 

perf <- performance(pred,"tpr","fpr")
plot(perf,colorize=TRUE)
```
 
### Ensemble Methods / Tree based methods

#### Simple Decision Trees and tree pruning

The goal of ensemble modeling is to improve performance over a baseline model by combining multiple models. So, we will set the baseline performance measure by starting with one algorithm. In our case, we will build a simple decision tree.

Decision trees are widely used classifiers in industry because of the transparency of the rules that lead to a prediction. They are arranged in a hierarchical tree-like structure and are simple to understand and interpret. They are robust to outliers and are able to capture nonlinear relationships.

We will be using the rpart library for creating decision trees. rpart stands for recursive partitioning and employs the CART (classification and regression trees) algorithm. Apart from rpart, there are many other decision tree libraries, such as C50, party, tree, and maptree.

```{r, results='hide'}
library(rpart)
library(rpart.plot)
```


##### Training the model with Pruning
Next, we create a decision tree model by calling the rpart function, starting with a base model. 

Prepruning is also known as early stopping. As the name suggests, the criteria are set as parameter values while building the rpart model. Below are some of the pre-pruning criteria that can be used; the tree stops growing when it meets any of them, or when its nodes become pure.

The complexity parameter (cp) in rpart is the minimum improvement in the model needed at each node. It’s based on the cost complexity of the model and works as follows: 

- For the given tree, add up the misclassification at every terminal node.
- Then multiply the number of splits by a penalty term (lambda) and add it to the total misclassification.
- Lambda is determined through cross-validation and is not reported in R.
- The cp we see using printcp() is the scaled version of lambda over the misclassification rate of the overall data.

The cp value is a stopping parameter: it speeds up the search for splits, because splits that do not meet this criterion can be pruned immediately rather than explored further.
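As a hedged sketch of how the cp table is used in practice (on the kyphosis data that ships with rpart, not our dataset), we can grow a deep tree, inspect the table with printcp(), and prune back with prune():

```r
library(rpart)

# Grow a deliberately deep tree by setting cp = 0
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             method = "class",
             control = rpart.control(cp = 0, minsplit = 2))

printcp(fit)   # cp table: CP, number of splits, cross-validated error (xerror)

# Prune back to the cp value with the lowest cross-validated error
best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best_cp)
```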

Other parameters include but are not limited to: 

- maxdepth: This parameter sets the maximum depth of the tree as a prepruning step.

- minsplit: The minimum number of records that must exist in a node for a split to be attempted.

Finally, since we are in a classification setting, we have to specify 'class' as the method for building our tree, instead of the 'anova' method used in regression settings. 

```{r}
base_model <- rpart(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data = train_set, method = "class",  control = rpart.control(cp = 0, maxdepth = 8, minsplit = 8))
summary(base_model)
#Plot Decision Tree
rpart.plot(base_model)
```

The summary of our base model gives the details of each split: the number of observations, the value of the complexity parameter, the predicted class, the class counts with their probabilities, and the children of the node. It also details the future splits, starting with the primary splits that follow and their percent improvement in the prediction, as well as the surrogate splits that come later on. 

The resulting tree, as explained in the section above, is the smallest tree with the lowest misclassification loss. It is plotted with the split details and leaf-node classes. 

The optimal CP value found was 0.96341463.

```{r}
preds = predict(base_model,train_set, type="class")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
```
The training accuracy of our tree is 99%, with an F1 score of 99% as well and a total of 2 misclassifications (1 FP and 1 FN).  

##### Testing the model

We achieve 100% accuracy on our held-out test set, which suggests that tree pruning successfully limited overfitting. 

```{r}
preds = predict(base_model,test_set, type="class")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')
```

##### ROC curve

```{r}
preds <- as.vector(preds, mode = "numeric")
test_set$Classes <- as.vector(test_set$Classes, mode = "numeric")

pred <- prediction(preds, test_set$Classes) 

perf <- performance(pred,"tpr","fpr")
plot(perf,colorize=TRUE)
```

#### BAGGING

Bagging, or bootstrap aggregation, is an ensemble method that involves training the same algorithm many times by using different subsets sampled from the training data. The final output prediction is then averaged across the predictions of all the sub-models. The two most popular bagging ensemble techniques are Bagged Decision Trees and Random Forest.
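The mechanics can be sketched in a few lines of base R (a toy illustration on the built-in iris data, not the caret workflow used below): draw bootstrap samples, fit one tree per sample, and combine the predictions by majority vote.

```r
library(rpart)

set.seed(6)
dat <- droplevels(subset(iris, Species != "setosa"))  # binary toy problem

B <- 25
votes <- sapply(seq_len(B), function(b) {
  idx  <- sample(nrow(dat), replace = TRUE)           # bootstrap resample
  tree <- rpart(Species ~ ., data = dat[idx, ], method = "class")
  as.character(predict(tree, dat, type = "class"))    # one vote per tree
})

# Majority vote across the B trees gives the bagged prediction
bagged_pred <- apply(votes, 1, function(v) names(which.max(table(v))))
mean(bagged_pred == dat$Species)                      # bagged training accuracy
```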

##### Bagged Decision Trees

This method performs best with algorithms that have high variance. The argument method="treebag" specifies the algorithm. We will train our model using 5-fold cross-validation repeated 5 times.

###### Training the model
```{r}
control <- trainControl(method="repeatedcv", number=5, repeats=5)

bagCART_model <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, method="treebag", metric="Accuracy", trControl=control)
```
###### Training accuracy
```{r}
preds = predict(bagCART_model,train_set, type="raw")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
```
###### Testing accuracy
```{r}
preds = predict(bagCART_model,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')
```
##### Random Forests
Random Forest is an extension of bagged decision trees where, in addition to sampling the observations, we also sample the variables considered at each split. The trees are constructed with the objective of reducing the correlation between the individual decision trees: by preventing the same strong predictors from dominating every bagged tree, we avoid strongly correlated trees.

###### Training the model
```{r}
control <- trainControl(method="repeatedcv", number=5, repeats=5)

rf_model <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, method="rf", metric="Accuracy", trControl=control)
```
###### Training accuracy
```{r}
preds = predict(rf_model,train_set, type="raw")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
```
###### Testing accuracy
```{r}
preds = predict(rf_model,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')
```

#### BOOSTING
In boosting, multiple models are trained sequentially and each model learns from the errors of its predecessors. We will use the Stochastic Gradient Boosting algorithm.

##### Stochastic Gradient Boosting
###### Training the model
```{r, results="hide"}
control <- trainControl(method="repeatedcv", number=5)

SGB <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data=train_set, method="gbm", metric="Accuracy", trControl=control)
```
###### Training accuracy
```{r}
preds = predict(SGB,train_set, type="raw")
confusionMatrix(preds, train_set$Classes,
                mode = "everything",
                positive='1')
```
###### Testing accuracy
```{r}
preds = predict(SGB,test_set, type="raw")
confusionMatrix(preds, test_set$Classes,
                mode = "everything",
                positive='1')
```
### SVM

Support Vector Machine is a discriminative classifier that separates observations using the hyperplane that best differentiates between the classes. Its advantages lie in its flexibility and its good performance on high-dimensional data.

We will use SVM on our dataset to demonstrate its capabilities.

#### Training the model

The goal of the SVM is to identify the boundary that maximizes the margin, i.e. the distance between the hyper-plane and the closest points of each class.

There are two hyper-parameters to take into consideration before training our SVM model. First, the cost *C*, which acts as a regularization parameter and trades off correct classification of the training examples against the maximization of the margin: the greater the value of *C*, the more heavily training misclassifications are penalized, yielding a narrower margin, while smaller values of *C* tolerate more training errors in exchange for a wider margin. The second hyper-parameter, *gamma*, defines how much curvature we want in our decision boundary. 
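To make the role of *C* concrete, here is a hedged sketch using the e1071 package (an assumption on our part; caret's svmRadial wraps a different backend) on built-in data rather than our fire data:

```r
library(e1071)

set.seed(6)
dat <- droplevels(subset(iris, Species != "setosa"))  # binary toy problem

# Small C: wide margin, more training misclassifications tolerated
svm_soft <- svm(Species ~ ., data = dat, kernel = "radial",
                cost = 0.1, gamma = 0.1)

# Large C: narrow margin, training errors penalized heavily
svm_hard <- svm(Species ~ ., data = dat, kernel = "radial",
                cost = 100, gamma = 0.1)

mean(predict(svm_soft, dat) == dat$Species)  # training accuracy, soft margin
mean(predict(svm_hard, dat) == dat$Species)  # typically at least as high
```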

We start by tuning our models over different values of *gamma* and *C*, fitting both a radial kernel and a linear kernel.

To use the cross validation functions from the [Caret](https://topepo.github.io/caret/train-models-by-tag.html#l1-regularization) package, we need to turn the 0/1 categorical values of the variable *Classes* into "fire"/"not_fire" (as required). The functions provided will allow us to find the best values for both *gamma* and *C*. We used a tuning length of 10.

```{r}
train_set$Classes <- mapvalues(train_set$Classes, from=c(0,1), to=c("not_fire","fire"))
test_set$Classes <- mapvalues(test_set$Classes, from=c(0,1), to=c("not_fire","fire"))
```

We perform 10-fold cross-validation repeated 5 times, using the ROC as the selection metric.


```{r}
control = trainControl(method = "repeatedcv", repeats=5, summaryFunction=twoClassSummary, classProbs=TRUE)

svm_model_radial <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data =train_set, method="svmRadial", tuneLength=10, metric="ROC", trControl=control)

svm_model_linear <- train(Classes ~ Temperature+Rain+FFMC+DMC+DC+ISI+BUI+FWI+RH, data =train_set, method="svmLinear", tuneLength=10, metric="ROC", trControl=control)

svm_model_radial
svm_model_linear
```

As we can see above, the best values were found to be *sigma* = 0.2510562 and *C* = 16 for the radial kernel, and *C* = 1 for the linear kernel.

Using the radial kernel resulted in an *AUC* of 99% while the linear gave us an *AUC* of 99.8%.

NOTE: Even though the seed is set to 6, the values for *C* and *sigma* might change when reproducing the experiment, because the dataset is small and relatively non-complex. Still, the AUC and accuracy values will remain very high.

We then show the confusion matrices for both kernels, on the training and test sets.

```{r}
preds_svm_train_linear <- predict(svm_model_linear, train_set)
preds_svm_train_radial <- predict(svm_model_radial, train_set)
  
confusionMatrix(preds_svm_train_linear,train_set$Classes, positive="fire")
confusionMatrix(preds_svm_train_radial,train_set$Classes, positive="fire")

preds_svm_test_linear <- predict(svm_model_linear, test_set)
preds_svm_test_radial <- predict(svm_model_radial, test_set)

confusionMatrix(preds_svm_test_linear,test_set$Classes, positive="fire")
confusionMatrix(preds_svm_test_radial,test_set$Classes, positive="fire")
```

For the radial kernel: our model gave us an accuracy of 100% on the training set and 96.15% on the test set.

For the linear kernel: the model gave an accuracy of 98.95% on the train set, performing worse than the radial kernel, which is expected since it is less flexible and fits the training data less closely. On the test set, however, it gave us the same accuracy. 

Overall, since our focus would be to correctly classify fires, we will go with the radial kernel since it gave us a fewer number of false negatives than the linear one.

#### Plotting the ROC curve

```{r}

ROCPred <- prediction(as.numeric(preds_svm_test_radial), as.numeric(test_set$Classes))
ROCPer <- performance(ROCPred, measure="tpr",x.measure="fpr")
auc <- performance(ROCPred, measure = "auc")
auc <- auc@y.values[[1]]
auc
plot(ROCPer, colorize = TRUE)
```

We get an AUC value of 0.953 using the SVM classifier.




